Initially, the intention of this project was to implement the
backward (implicit) method with Neumann boundary conditions using the
same methodology as was used for the implicit method with Dirichlet
boundary conditions. However, after many attempts and modifications to
make this scheme work, the following implementation ultimately still
proved flawed.
As a result, the implementation of the implicit method with Neumann
boundary conditions using NumPy's linalg.solve function was kept in
the main body of this project, and the attempt to utilize diagonals, now
named NeumannWithDiag(), was kept in this appendix. A discussion of the
perceived flaws in the implementation is also presented.
Implementation
Setting Up the Diagonals Function
Set up the diagonals with Neumann boundary conditions for the
implicit finite difference scheme. The boundary values are specified by
forcing the first and last entries of all three diagonals to match the
values in the first and last rows of the \(A_{Implicit}\) matrix.
def setUpNeumannDiagonals(N, rho):
    # lower, diagonal, upper
    l = np.zeros((N, 1))
    d = np.zeros((N, 1))
    u = np.zeros((N, 1))
    for i in range(N):
        l[i] = -rho
        d[i] = 1 + 2*rho
        u[i] = -rho
    # Neumann boundary conditions:
    # Left
    u[0] = -4*rho
    d[0] = 1 + (7/2)*rho
    l[0] = 0.5*rho
    # Right
    u[-1] = 0.5*rho
    d[-1] = 1 + (7/2)*rho
    l[-1] = -4*rho
    return l, d, u
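To make the boundary treatment concrete, the sketch below assembles the dense matrix these diagonals imply, under one plausible reading: l[0] is the extra entry at column 2 of the first row, and u[-1] the extra entry at column N-3 of the last row, as the solver later eliminates. The function is restated here (with flat arrays rather than (N,1) columns) so the demo is self-contained; the helper name assembleNeumannMatrix is mine, not from the project.

```python
import numpy as np

# Restatement of setUpNeumannDiagonals from above, using flat arrays
# for clarity; the values match the original function exactly.
def setUpNeumannDiagonals(N, rho):
    l = np.full(N, -rho)        # sub-diagonal
    d = np.full(N, 1 + 2*rho)   # main diagonal
    u = np.full(N, -rho)        # super-diagonal
    # Neumann boundary conditions, left row
    u[0] = -4*rho
    d[0] = 1 + (7/2)*rho
    l[0] = 0.5*rho              # extra entry, not on the sub-diagonal
    # Neumann boundary conditions, right row
    u[-1] = 0.5*rho             # extra entry, not on the super-diagonal
    d[-1] = 1 + (7/2)*rho
    l[-1] = -4*rho
    return l, d, u

# Hypothetical helper: build the full matrix, placing l[0] at (0, 2)
# and u[-1] at (N-1, N-3), the reading the solver's elimination implies.
def assembleNeumannMatrix(N, rho):
    l, d, u = setUpNeumannDiagonals(N, rho)
    A = np.diag(d)
    for i in range(1, N):
        A[i, i-1] = l[i]        # sub-diagonal entries
    for i in range(N - 1):
        A[i, i+1] = u[i]        # super-diagonal entries
    A[0, 2] = l[0]              # extra left-boundary entry
    A[N-1, N-3] = u[-1]         # extra right-boundary entry
    return A

A = assembleNeumannMatrix(5, 0.1)
print(np.round(A, 3))
```

The two extra corner entries are what keep this matrix from being strictly tridiagonal, which is why the solver below needs its initial "upward" and "downward" elimination steps.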
Tri-Diagonal Solver for Implicit Scheme with Neumann Boundary
Conditions
Solves the system of equations that setUpNeumannDiagonals(N, rho) gives
us, using Gaussian elimination to first reduce the extra boundary
entries to true upper, middle, and lower diagonals.
def NeumannDiagSolver(l, d, u, b):
    N = len(d)
    lL, dD, uU, bB = map(np.array, (l, d, u, b))  # copy arrays
    # Upward Gaussian elimination: fold the extra l[0] entry
    # (column 2 of row 0) into the first row using row 1.
    tmp = lL[0]/uU[1]
    dD[0] = dD[0] - tmp*lL[1]
    uU[0] = uU[0] - tmp*dD[1]
    bB[0] = bB[0] - tmp*bB[1]
    # Downward Gaussian elimination: fold the extra u[-1] entry
    # (column N-3 of row N-1) into the last row using row N-2.
    tmp = uU[-1]/lL[-2]
    lL[-1] = lL[-1] - tmp*dD[-2]
    dD[-1] = dD[-1] - tmp*uU[-2]
    bB[-1] = bB[-1] - tmp*bB[-2]
    # Forward elimination of the sub-diagonal. Note that at i = 0 this
    # indexes dD[-1], uU[-1], and bB[-1] (Python wrap-around), one of
    # the suspected flaws in this implementation.
    for i in range(0, N):
        tmp = lL[i]/dD[i-1]
        dD[i] = dD[i] - tmp*uU[i-1]
        bB[i] = bB[i] - tmp*bB[i-1]
    # Back substitution. Note that x aliases dD and the loop starts at
    # N-3, so x[-2] is never assigned and keeps the value of dD[-2].
    x = dD
    x[-1] = bB[-1]/dD[-1]
    for il in range(N-3, -1, -1):
        x[il] = (bB[il] - uU[il]*x[il+1])/dD[il]
    return x
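For reference, the textbook forward-sweep/back-substitution this routine approximates is the Thomas algorithm for a strictly tridiagonal system (one with no extra boundary entries). The sketch below is my own restatement of that standard algorithm, not the author's solver, and is offered only as a baseline for comparison:

```python
import numpy as np

def thomas_solve(l, d, u, b):
    """Textbook Thomas algorithm for a strictly tridiagonal system A x = b.

    Convention: l[i] is A[i, i-1] (l[0] unused), d[i] is A[i, i],
    u[i] is A[i, i+1] (u[-1] unused).
    """
    N = len(d)
    dD = np.array(d, dtype=float)   # copies, so the inputs are untouched
    bB = np.array(b, dtype=float)
    # Forward sweep: eliminate the sub-diagonal, starting from row 1.
    for i in range(1, N):
        w = l[i] / dD[i-1]
        dD[i] -= w * u[i-1]
        bB[i] -= w * bB[i-1]
    # Back substitution, visiting every row from N-2 down to 0.
    x = np.zeros(N)
    x[-1] = bB[-1] / dD[-1]
    for i in range(N-2, -1, -1):
        x[i] = (bB[i] - u[i] * x[i+1]) / dD[i]
    return x
```

Two differences from NeumannDiagSolver stand out: the forward sweep starts at i = 1 (avoiding the wrap-around at i = 0), and the back substitution starts at N-2, so every unknown is assigned.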
NeumannWithDiag() Function
def NeumannWithDiag(Stock_or_Rate, M, plot=True):
    # M = number of iterations; the larger M is, the smaller the step
    # forward in time taken by each iteration
    x, u = Stock(Stock_or_Rate)   # stock prices
    res = np.zeros((len(u), M))   # res holds the price vector generated by each iteration
    u0 = u[0]
    uend = u[-1]
    N = len(u)                    # set N equal to the length of the u-vector
    dX = (b - a)/N                # the distance between each closing price, i.e. time between daily closing prices
    dT = T/M                      # the distance in time between each iteration
    rho = kappa*dT/(dX*dX)
    lL, dD, uU = setUpNeumannDiagonals(N, rho)
    if plot:
        plt.figure(figsize=(20, 10))
        plt.plot(x, u)
    for j in range(M):
        u[0] = u0
        u[-1] = uend
        u = NeumannDiagSolver(lL, dD, uU, u)
        tmp = np.reshape(u, N)    # shape this iteration's price vector to (N,)
        res[:, j] = tmp           # store it in column j of matrix res
        if plot:
            plt.plot(x, u)
    if plot:
        plt.grid(True)
        plt.xlabel('$x$')
        plt.ylabel(r'$u(x,\tau)$')
        plt.show()
    return res[:, M-1]            # return the last column of res (the price vector from the final iteration)
Test Plot Output of NeumannWithDiag()
The plots below show sample output from NeumannWithDiag() for
each stock using 1 iteration. Each appears to work as expected, with the
end points not anchored to the end points of the true stock price curve.
The following sections show how changing the number of iterations (M)
and the kappa value \((\kappa)\) cause
problems with this implementation.
Reset to Initial Parameters
a = 0 # "start"
b = 1 # "end"
kappa = 0.001 # how much we let the extreme points affect the smoothing
T = 1 # the length of the simulation - for how long are we running the smoothing / how long will the simulation take
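As a quick arithmetic check of the scheme's key parameter, the sketch below computes rho = kappa·dT/dX² for the values above, assuming N = 129 grid points (the stock-price vector length used in this project); both N and the single-iteration M = 1 are assumptions matching the first runs below.

```python
# Parameters from above
a, b = 0, 1
kappa = 0.001
T = 1
# Assumed values: N = 129 grid points, M = 1 iteration
N, M = 129, 1
dX = (b - a) / N        # spatial step
dT = T / M              # time step per iteration
rho = kappa * dT / (dX * dX)
print(rho)              # roughly 16.64 for M = 1; rho scales as 1/M
```

Even with the small kappa = 0.001, a single iteration gives a fairly large rho; increasing M shrinks dT and hence rho proportionally.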
COSTCO stock
cost_nwd1 = NeumannWithDiag(COST, 1)

JC Penney Stock
jcp_nwd1 = NeumannWithDiag(JCP, 1)

Apple Stock
aapl_nwd1 = NeumannWithDiag(AAPL, 1)

Tesla Stock
tsla_nwd1 = NeumannWithDiag(TSLA, 1)

Credit Suisse Stock
cs_nwd1 = NeumannWithDiag(CS, 1)

Effect of Changing the Number of Iterations (M) for
NeumannWithDiag()
As the number of iterations M increases, all of the output
points of NeumannWithDiag() appear to converge to the output of the
implicit method function with Dirichlet boundary conditions. While this
is to be expected for the middle points of the output curve, it should
not be the case for the points close to the left and right boundaries of
the output curves.
This behavior is demonstrated in the plots below, which include:
- The heat equation smoothing of Credit Suisse stock using
NeumannWithDiag()
- Comparison plots of the output of NeumannWithDiag() and of the
implicit method with Dirichlet boundary conditions
under constant initial conditions for 1, 10, 25, 50, 100, 500, and
1000 iterations.
Function to Plot the Dirichlet and Neumann Curves Generated from
methodDiagUVals()
def compPlt(df, Ticker=0):
    plt.figure(figsize=(20, 10))
    p = plt.grid(True)
    p = plt.xlabel('$x$')
    p = plt.ylabel(r'$u(x,\tau)$')
    if isinstance(Ticker, pd.DataFrame):
        u = Stock(Ticker)[1]
        p = plt.plot(df.x, u, 'k-', label='True Stock Price')
    p = plt.plot(df.x, df.Dirichlet_Price, 'r-', marker='o', label='Dirichlet Price')
    p = plt.plot(df.x, df.Neumann_Price, 'b-', marker='o', label='Neumann Price')
    p = plt.legend(loc='lower right', fontsize=18)
    return p
M = 1
nwd1 = NeumannWithDiag(CS, 1)
nwd1_comp = methodDiagUVals(CS, 1)
compPlt(nwd1_comp)
nwd1_comp.iloc[range(0,130,10)].style.hide_index()


M = 10
nwd3 = NeumannWithDiag(CS, 10)
nwd3_comp = methodDiagUVals(CS, 10)
compPlt(nwd3_comp)
nwd3_comp.iloc[range(0,130,10)].style.hide_index()


M = 25
nwd4 = NeumannWithDiag(CS, 25)
nwd4_comp = methodDiagUVals(CS, 25)
compPlt(nwd4_comp)
nwd4_comp.iloc[range(0,130,10)].style.hide_index()


M = 50
nwd5 = NeumannWithDiag(CS, 50)
nwd5_comp = methodDiagUVals(CS, 50)
compPlt(nwd5_comp)
nwd5_comp.iloc[range(0,130,10)].style.hide_index()


M = 100
nwd6 = NeumannWithDiag(CS, 100)
nwd6_comp = methodDiagUVals(CS, 100)
compPlt(nwd6_comp)
nwd6_comp.iloc[range(0,130,10)].style.hide_index()


M = 500
nwd7 = NeumannWithDiag(CS, 500)
nwd7_comp = methodDiagUVals(CS, 500)
compPlt(nwd7_comp)
nwd7_comp.iloc[range(0,130,10)].style.hide_index()


M = 1000
nwd8 = NeumannWithDiag(CS, 1000)
nwd8_comp = methodDiagUVals(CS, 1000)
compPlt(nwd8_comp)
nwd8_comp.iloc[range(0,130,10)].style.hide_index()


Effect of Changing Kappa \(\left( \kappa
\right)\) for NeumannWithDiag()
As expected, increasing the value of \(\kappa\) enhances smoothing; however, using
too large a \(\kappa\) causes so much smoothing that the heat equation
output converges to a straight, horizontal line at the average price of the
true price curve. On the other hand, using too small a value for \(\kappa\) leads to very little
smoothing, to the point that the smoothed curve converges to the
true stock price curve.
What is unexpected in the following plots is that, for large
\(\kappa\) values, the convergence to a
straight, horizontal line at the average price of the true
price curve takes multiple iterations, rather than the output both
beginning and ending at that average price.
This is shown in the plots below, which show the heat equation
smoothing of Credit Suisse stock under constant initial conditions using
\(\kappa\) values of 0.00001, 0.0001,
0.001, 0.01, 0.1, 1, 10, 100, and 1000.
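Since rho = kappa·dT/dX² is linear in kappa, this sweep spans roughly eight orders of magnitude in rho. The sketch below tabulates the rho each kappa implies, assuming N = 129 grid points and the M = 100 iterations used in the calls that follow (both assumptions, matching the calls rather than stated explicitly in the text):

```python
# Assumed parameters matching the runs below: N = 129 grid points, M = 100
a, b, T = 0, 1, 1
N, M = 129, 100
dX = (b - a) / N
dT = T / M
# rho implied by each kappa in the sweep
for kappa in [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100, 1000]:
    rho = kappa * dT / (dX * dX)
    print(f"kappa = {kappa:g}: rho = {rho:.5g}")
```

At the top of the sweep rho exceeds 10^5, so the implicit system is heavily dominated by the off-diagonal smoothing terms, which is consistent with the output collapsing toward a flat line.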
\(\kappa\) = 0.00001
kappa = 0.00001
ktnd9 = NeumannWithDiag(CS, 100)

\(\kappa\) = 0.0001
kappa = 0.0001
ktnd8 = NeumannWithDiag(CS, 100)

\(\kappa\) = 0.001
kappa = 0.001
ktnd7 = NeumannWithDiag(CS, 100)

\(\kappa\) = 0.01
kappa = 0.01
ktnd6 = NeumannWithDiag(CS, 100)

\(\kappa\) = 0.1
kappa = 0.1
ktnd5 = NeumannWithDiag(CS, 100)

\(\kappa\) = 1
kappa = 1
ktnd4 = NeumannWithDiag(CS, 100)

\(\kappa\) = 10
kappa = 10
ktnd3 = NeumannWithDiag(CS, 100)

\(\kappa\) = 100
kappa = 100
ktnd2 = NeumannWithDiag(CS, 100)

\(\kappa\) = 1000
kappa = 1000
ktnd1 = NeumannWithDiag(CS, 100)
